
    Substitutional reality: using the physical environment to design virtual reality experiences

    Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging because of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every physical object surrounding a user is paired, with some degree of discrepancy, with a virtual counterpart. We present a model of potential substitutions and validate it in two user studies. In the first study, we investigated factors that affect participants' suspension of disbelief and ease of use: we systematically altered the virtual representation of a physical object and recorded responses from 20 participants. The second study investigated users' levels of engagement as the physical proxy for a virtual object varied. From the results, we derive a set of guidelines for the design of future Substitutional Reality experiences.
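    The pairing idea described above can be sketched as a discrepancy-minimising assignment between physical objects and virtual props. Everything in this sketch is illustrative: the object attributes, weights, and greedy strategy are our assumptions, not the paper's actual model.

```python
# Illustrative sketch (not the paper's implementation): greedily pair each
# physical object with the virtual prop that minimises a discrepancy score.
# The size/shape attributes and weights below are hypothetical.

def discrepancy(physical, virtual, w_size=1.0, w_shape=0.5):
    """Lower scores mean the virtual prop is a closer substitute."""
    size_diff = abs(physical["size"] - virtual["size"])
    shape_diff = 0.0 if physical["shape"] == virtual["shape"] else 1.0
    return w_size * size_diff + w_shape * shape_diff

def pair_objects(physical_objects, virtual_props):
    """Assign each physical object the best remaining virtual prop."""
    remaining = list(virtual_props)
    pairs = {}
    for obj in physical_objects:
        best = min(remaining, key=lambda v: discrepancy(obj, v))
        pairs[obj["name"]] = best["name"]
        remaining.remove(best)
    return pairs

physical = [{"name": "sofa", "size": 2.0, "shape": "box"},
            {"name": "mug", "size": 0.1, "shape": "cylinder"}]
virtual = [{"name": "rock", "size": 1.8, "shape": "box"},
           {"name": "potion", "size": 0.12, "shape": "cylinder"}]
print(pair_objects(physical, virtual))
```

    A full system would also need to handle unequal set sizes and bound the acceptable discrepancy, which is precisely where the paper's guidelines apply.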

    The body language of fear: fearful nonverbal signals in survival-horror games

    In this paper, we present an exploration of players’ nonverbal body expressions when playing survival-horror games. We compared physiological signals and body expressions of 16 participants playing two games: a survival-horror game (Slender: The Eight Pages) and a custom-built baseline game with the same map and controls (Treasure Hunt). We show that the hard fun style of survival-horror games makes full-body expressions an unsuitable modality for affect recognition, but scary game events are clearly expressed in players’ physiological signals.

    Gaze-supported gaming: MAGIC techniques for first person shooters

    MAGIC (Manual And Gaze Input Cascaded) pointing techniques have been proposed as an efficient way for the eyes to support mouse input in pointing tasks. MAGIC Sense is one such technique, in which the cursor speed is modulated by how far the cursor is from the gaze point. In this work, we implemented continuous and discrete adaptations of MAGIC Sense for First-Person Shooter input. We evaluated the performance of these techniques in an experiment with 15 participants and found no significant gain in performance, but a moderate user preference for the discrete technique.
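    The gaze-distance gain modulation at the core of MAGIC Sense can be sketched as a simple ramp function. This is a minimal illustration, not the paper's implementation: the distance thresholds, gain range, and linear ramp are our assumptions.

```python
import math

def magic_sense_gain(cursor, gaze, near=50.0, far=300.0,
                     min_gain=0.3, max_gain=1.0):
    """Scale pointer gain by cursor-to-gaze distance (pixels): reduced
    gain near the gaze point for precision, full gain far away, with a
    linear ramp in between. All thresholds here are hypothetical."""
    d = math.hypot(cursor[0] - gaze[0], cursor[1] - gaze[1])
    if d <= near:
        return min_gain
    if d >= far:
        return max_gain
    t = (d - near) / (far - near)
    return min_gain + t * (max_gain - min_gain)
```

    A discrete adaptation, as compared in the study, would instead snap between two fixed gain levels at a single distance threshold rather than ramping continuously.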

    An empirical characterization of touch-gesture input force on mobile devices

    Designers of force-sensitive user interfaces lack a ground-truth characterization of input force while performing common touch gestures (zooming, panning, tapping, and rotating). This paper provides such a characterization, first by deriving baseline force profiles in a tightly controlled user study, then by examining how these profiles vary under different conditions such as form factor (mobile phone and tablet), interaction position (walking and sitting), and urgency (timed and untimed tasks). We conducted two user studies with 14 and 24 participants respectively and report: (1) force profile graphs that depict the force variations of common touch gestures, (2) the effect of the different conditions on force exerted and gesture completion time, and (3) the most common forces that users apply and the time taken to complete the gestures. This characterization is intended to aid the design of interactive devices that integrate force input with common touch gestures in different conditions.
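    Summarising a sampled force profile into the kinds of measures reported above (peak force, completion time) can be sketched as follows. The sampling rate, threshold, and example trace are hypothetical, chosen only to illustrate the idea.

```python
def force_profile_summary(samples, rate_hz=100, threshold=0.05):
    """Summarise one gesture's force profile (newtons, sampled at
    rate_hz): peak force, and completion time measured from the first
    to the last sample above a contact threshold."""
    above = [i for i, f in enumerate(samples) if f > threshold]
    if not above:
        return {"peak_force": 0.0, "duration_s": 0.0}
    duration = (above[-1] - above[0] + 1) / rate_hz
    return {"peak_force": max(samples), "duration_s": duration}

# Hypothetical tap gesture sampled at 100 Hz.
tap = [0.0, 0.1, 0.4, 0.8, 0.5, 0.1, 0.0]
print(force_profile_summary(tap))  # peak 0.8 N over 0.05 s
```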

    Interactions under the desk: a characterisation of foot movements for input in a seated position

    We characterise foot movements as input for seated users. First, we built unconstrained foot pointing performance models in a seated desktop setting using ISO 9241-9-compliant Fitts’s Law tasks. Second, we evaluated the effect of the foot and direction in one-dimensional tasks, finding no effect of the foot used, but a significant effect of the direction in which targets are distributed. Third, we compared one foot against two feet to control two variables, finding that while one foot is better suited for tasks with a spatial representation that matches its movement, there is little difference between the techniques when it does not. Fourth, we analysed the overhead caused by introducing a feet-controlled variable in a mouse task, finding the feet to be comparable to the scroll wheel. Our results show that the feet are an effective method of enhancing our interaction with desktop systems, and we derive a series of design guidelines.
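    The Fitts's Law analysis underlying ISO 9241-9 pointing studies rests on the Shannon formulation of the index of difficulty. A minimal sketch (using nominal rather than effective throughput, and with made-up trial values):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits:
    ID = log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Nominal throughput in bits/s; ISO 9241-9 evaluations typically
    use effective width and distance instead."""
    return index_of_difficulty(distance, width) / movement_time_s

# Hypothetical foot-pointing trial: 300 mm movement to a 100 mm target
# completed in 1.5 s.
print(round(throughput(300, 100, 1.5), 2))  # 1.33 bits/s
```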

    Feet movement in desktop 3D interaction

    In this paper, we present an exploratory work on the use of foot movements to support fundamental 3D interaction tasks. Depth cameras such as the Microsoft Kinect are now able to track users' motion unobtrusively, making it possible to draw on the spatial context of gestures and movements to control 3D UIs. Whereas multitouch and mid-air hand gestures have been explored extensively for this purpose, little work has looked at how the same can be accomplished with the feet. We describe the interaction space of foot movements in a seated position and propose applications for such techniques in three-dimensional navigation, selection, manipulation and system control tasks in a 3D modelling context. We explore these applications in a user study and discuss the advantages and disadvantages of this modality for 3D UIs.

    The use of lexical complexity for assessing difficulty in instructional videos

    Although measures of lexical complexity are well established for printed texts, there is currently no equivalent work for videos. This study, therefore, aims to investigate whether existing lexical complexity measures can be extended to predict second language (L2) learners’ judgment of video difficulty. Using a corpus of 320 instructional videos, regression models were developed for explaining and predicting difficulty using indices of lexical sophistication, density, and diversity. Results of the study confirm key dimensions of lexical complexity in estimates of video difficulty. In particular, lexical frequency indices accounted for the largest variance in the assessment of video difficulty (R2 = .45). We conclude with implications for CALL and suggest areas of further research.
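    Two of the index families named above, lexical diversity and frequency-based sophistication, can be sketched in a few lines. The tiny frequency list and the regression coefficients are entirely made up for illustration; the study's actual indices and fitted models are not reproduced here.

```python
import math

def type_token_ratio(tokens):
    """Lexical diversity: distinct word types divided by total tokens."""
    return len(set(tokens)) / len(tokens)

# Hypothetical frequency list: lower rank = more frequent (easier) word.
FREQ_RANK = {"the": 1, "cat": 120, "sat": 300, "ubiquitous": 9000}

def mean_log_frequency_rank(tokens, default_rank=10000):
    """Lexical sophistication proxy: mean log frequency rank of the
    tokens; higher values mean rarer vocabulary."""
    ranks = [FREQ_RANK.get(t, default_rank) for t in tokens]
    return sum(math.log(r) for r in ranks) / len(ranks)

def predicted_difficulty(tokens, b0=-1.0, b_ttr=2.0, b_freq=0.5):
    """Toy linear model in the spirit of the study's regressions;
    these coefficients are invented, not the fitted values."""
    return (b0 + b_ttr * type_token_ratio(tokens)
            + b_freq * mean_log_frequency_rank(tokens))
```

    For video transcripts, such indices would be computed over the spoken text before being entered into the regression.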

    An empirical investigation of gaze selection in mid-air gestural 3D manipulation

    In this work, we investigate gaze selection in the context of mid-air hand gestural manipulation of 3D rigid bodies on monoscopic displays. We present the results of a user study with 12 participants in which we compared the performance of Gaze, a Raycasting technique (2D Cursor) and a Virtual Hand technique (3D Cursor) to select objects in two 3D mid-air interaction tasks. We also compared selection confirmation times for Gaze selection when selection is followed by manipulation to when it is not. Our results show that gaze selection is faster than and preferred over 2D and 3D mid-air-controlled cursors, and is particularly well suited to tasks in which users constantly switch between several objects during manipulation. Further, selection confirmation times are longer when selection is followed by manipulation than when it is not.

    The feet in human–computer interaction: a survey of foot-based interaction

    Foot-operated computer interfaces have been studied since the inception of human–computer interaction. Thanks to the miniaturisation and decreasing cost of sensing technology, there is increasing interest in exploring this alternative input modality, but no comprehensive overview of its research landscape exists. In this survey, we review the literature on interfaces operated by the lower limbs. We investigate the characteristics of users and how they affect the design of such interfaces. Next, we describe and analyse foot-based research prototypes and commercial systems in terms of how they capture input and provide feedback. We then analyse the interactions between users and systems from the perspective of the actions performed in these interactions. Finally, we discuss our findings and use them to identify open questions and directions for future research.